    Theory completion using inverse entailment

    The main real-world applications of Inductive Logic Programming (ILP) to date rely on the "Observation Predicate Learning" (OPL) assumption, in which the examples and the hypotheses define the same predicate. However, potential applications exist in both scientific discovery and language learning in which OPL does not hold. OPL is ingrained within the theory and performance testing of Machine Learning. A general ILP technique called "Theory Completion using Inverse Entailment" (TCIE) is introduced which is applicable to non-OPL applications. TCIE is based on inverse entailment and is closely allied to abductive inference. The implementation of TCIE within Progol5.0 is described. The implementation uses contrapositives in a similar way to Stickel's Prolog Technology Theorem Prover. Progol5.0 is tested on two datasets. The first involves a grammar which translates numbers into their English representation; the second involves hypothesising the function of unknown genes within a network of metabolic pathways. On both datasets, near-complete recovery of predictive performance is achieved by relearning after randomly chosen portions of the background knowledge are removed. Progol5.0's running times for the experiments in this paper were typically under 6 seconds on a standard laptop PC.
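
    To make the contrapositive idea concrete, the sketch below generates the contrapositive variants of a single definite clause in the style of Stickel's Prolog Technology Theorem Prover. It is a toy Python illustration over propositional clause names, not Progol5.0's actual implementation; the clause and predicate names are invented.

        # Sketch: contrapositives of a definite clause head :- b1, ..., bn.
        # Each body literal in turn becomes the head, with the negated
        # original head joining the remaining body literals.

        def negate(lit):
            return lit[4:] if lit.startswith("not_") else "not_" + lit

        def contrapositives(head, body):
            variants = [(head, list(body))]          # the original clause
            for i, b in enumerate(body):
                rest = body[:i] + body[i + 1:]
                variants.append((negate(b), [negate(head)] + rest))
            return variants

        # Example clause: path :- edge, path   (arguments elided for brevity)
        for h, bs in contrapositives("path", ["edge", "path"]):
            print(h, ":-", ", ".join(bs) if bs else "true")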

    Inferring the function of genes from synthetic lethal mutations

    Techniques for detecting synthetic lethal mutations in double gene deletion experiments are emerging as a powerful tool for analysing genes in parallel or overlapping pathways with a shared function. This paper introduces a logic-based approach that uses synthetic lethal mutations to map genes of unknown function to enzymes in a known metabolic network. We show how such mappings can be automatically computed by a logical learning system called eXtended Hybrid Abductive Inductive Learning (XHAIL).
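
    As a concrete (if greatly simplified) illustration of the mapping task, the following Python sketch enumerates assignments of unknown genes to enzymes and keeps those under which every observed synthetic lethal pair corresponds to enzymes on parallel pathways. This is not XHAIL itself, and the tiny "network", gene names, and observations are all invented for illustration.

        from itertools import permutations

        # Assumed background knowledge: e1 and e2 are alternative routes to
        # the same essential product, so knocking out both is lethal.
        enzymes = ["e1", "e2", "e3"]
        parallel = {frozenset({"e1", "e2"})}

        # Assumed observations: deleting geneA and geneB together is lethal,
        # while each single deletion is viable.
        lethal_pairs = [("geneA", "geneB")]
        unknown_genes = ["geneA", "geneB"]

        def consistent(mapping):
            # Every lethal pair must map to a pair of parallel enzymes.
            return all(frozenset({mapping[a], mapping[b]}) in parallel
                       for a, b in lethal_pairs)

        for assignment in permutations(enzymes, len(unknown_genes)):
            mapping = dict(zip(unknown_genes, assignment))
            if consistent(mapping):
                print(mapping)  # {'geneA': 'e1', 'geneB': 'e2'} and its mirror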

    Pruning classification rules with instance reduction methods

    Generating classification rules from data often leads to large rule sets that need to be pruned. A new pre-pruning technique for rule induction is presented which applies instance reduction before rule induction. Training three rule classifiers on datasets that were first reduced with instance reduction methods yields a statistically significantly lower number of generated rules, without adversely affecting predictive performance. The search strategies used by the three algorithms vary in both type (depth-first or beam search) and direction (general-to-specific or specific-to-general).
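
    For illustration, the sketch below implements one classic instance reduction method, Wilson's Edited Nearest Neighbour, which drops every training instance whose class disagrees with the majority class of its k nearest neighbours; the reduced set would then be passed to the rule inducer in place of the full training set. The specific reduction methods and rule learners used in the paper may differ, and the toy data are invented.

        import numpy as np

        def enn_reduce(X, y, k=3):
            """Edited Nearest Neighbour: keep only instances whose label
            matches the majority label of their k nearest neighbours."""
            keep = []
            for i in range(len(X)):
                d = np.linalg.norm(X - X[i], axis=1)
                d[i] = np.inf                        # ignore the point itself
                neighbour_labels = y[np.argsort(d)[:k]]
                if np.bincount(neighbour_labels).argmax() == y[i]:
                    keep.append(i)
            return X[keep], y[keep]

        # Toy data: the last instance sits inside the other class and is
        # removed, so the rule learner sees cleaner class boundaries.
        X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [0.15]])
        y = np.array([0, 0, 0, 1, 1, 1])
        X_r, y_r = enn_reduce(X, y)
        print(len(X), "->", len(X_r))                # 6 -> 5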